Explainable ML


Review for NeurIPS paper: Towards Interpretable Natural Language Understanding with Explanations as Latent Variables

Neural Information Processing Systems

Weaknesses: My main concern is about how explanations are being employed as latent variables. I had assumed based on the introduction that the final predictor would factor through the final explanation. This would provide the faithfulness guarantee that two inputs which produce the same explanation would produce the same output label. However, it seems that during training, the explanation is conditioned on the gold label. The paper points out on L161 that "generating explanations without a predicted label often results in irrelevant and even misleading explanations."
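To make the concern concrete: the faithfulness guarantee the review describes would hold if the predictive distribution factored through the explanation alone (a sketch under assumed notation, with x the input, e the explanation, and y the label; the paper's own parameterization may differ):

\[
p(y \mid x) \;=\; \sum_{e} p(e \mid x)\, p(y \mid e),
\]

so that any two inputs mapped to the same explanation e receive the same label distribution. Training the explanation generator on the gold label instead amounts to modeling p(e \mid x, y), which lets the label inform the explanation rather than the explanation justifying the label.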


Interpretability and Explainability: A Machine Learning Zoo Mini-tour

Marcinkevičs, Ričards, Vogt, Julia E.

arXiv.org Artificial Intelligence

In this literature review, we provided a survey of interpretable and explainable machine learning methods (see Tables 1 and 2 for a summary of the techniques), described the most common goals and desiderata for these techniques, motivated their relevance in several fields of application, and discussed their quantitative evaluation. Interpretability and explainability remain an active area of research, especially in the face of recent rapid progress in designing highly performant predictive models and the inevitable infusion of machine learning into domains where decisions have far-reaching consequences. For years the field has been challenged by a lack of clear definitions for interpretability and explainability, with these terms often wielded "in a quasi-mathematical way" [6, 122]. For many techniques, there still exist no satisfactory functionally-grounded evaluation criteria or universally accepted benchmarks, hindering reproducibility and model comparison. Moreover, meaningful adaptations of these methods to 'real-world' machine learning systems and data analysis problems largely remain a matter for the future. It has been argued that, for successful and widespread use of interpretable and explainable machine learning models, stakeholders need to be involved in the discussion [4, 122]. A meaningful and equal collaboration between machine learning researchers and stakeholders from various domains, such as medicine, the natural sciences, and law, is a logical next step in the evolution of interpretable and explainable ML.


Pitfalls of Explainable ML: An Industry Perspective

Verma, Sahil, Lahiri, Aditya, Dickerson, John P., Lee, Su-In

arXiv.org Artificial Intelligence

As machine learning (ML) systems take a more prominent and central role in life-impacting decisions, ensuring their trustworthiness and accountability is of utmost importance. Explanations sit at the core of these desirable attributes of an ML system. The emerging field is frequently called "Explainable AI (XAI)" or "Explainable ML." The goal of explainable ML is to intuitively explain the predictions of an ML system while adhering to the needs of various stakeholders. Many explanation techniques have been developed, with contributions from both academia and industry. However, several existing challenges have not garnered enough interest and serve as roadblocks to the widespread adoption of explainable ML. In this short paper, we enumerate challenges in explainable ML from an industry perspective. We hope these challenges will serve as promising future research directions and contribute to democratizing explainable ML.


Best of arXiv.org for AI, Machine Learning, and Deep Learning – March 2021 - insideBIGDATA

#artificialintelligence

Researchers from all over the world contribute to this repository as a prelude to the peer review process for publication in traditional journals. The articles listed below represent a small fraction of all articles appearing on the preprint server. They are listed in no particular order with a link to each paper along with a brief overview. Links to GitHub repos are provided when available. Especially relevant articles are marked with a "thumbs up" icon.


Hardware Acceleration of Explainable Machine Learning using Tensor Processing Units

Pan, Zhixin, Mishra, Prabhat

arXiv.org Artificial Intelligence

Machine learning (ML) is successful in achieving human-level performance in various fields. However, it lacks the ability to explain an outcome due to its black-box nature. While existing explainable ML is promising, almost all of these methods focus on formulating interpretability as an optimization problem. Such a mapping leads to numerous iterations of time-consuming complex computations, which limits their applicability in real-time applications. In this paper, we propose a novel framework for accelerating explainable ML using Tensor Processing Units (TPUs). The proposed framework exploits the synergy between matrix convolution and the Fourier transform, and takes full advantage of the TPU's natural ability to accelerate matrix computations. Specifically, this paper makes three important contributions. (1) To the best of our knowledge, our proposed work is the first attempt at enabling hardware acceleration of explainable ML using TPUs. (2) Our proposed approach is applicable across a wide variety of ML algorithms, and effective utilization of TPU-based acceleration can lead to real-time outcome interpretation. (3) Extensive experimental results demonstrate that our proposed approach can provide an order-of-magnitude speedup in both classification time (25x on average) and interpretation time (13x on average) compared to state-of-the-art techniques.
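The "synergy between matrix convolution and the Fourier transform" presumably refers to the convolution theorem, under which convolution becomes elementwise multiplication in the frequency domain. As a rough illustration of that principle only (a minimal NumPy sketch with an assumed helper name, not the authors' TPU implementation):

import numpy as np

def fft_convolve(x, k):
    # Convolution theorem: conv(x, k) = IFFT(FFT(x) * FFT(k)).
    # Zero-padding to the full output length avoids circular wrap-around.
    n = len(x) + len(k) - 1
    X = np.fft.rfft(x, n)
    K = np.fft.rfft(k, n)
    return np.fft.irfft(X * K, n)

x = np.random.randn(1024)
k = np.random.randn(31)
assert np.allclose(fft_convolve(x, k), np.convolve(x, k))

The appeal is asymptotic: direct convolution of lengths n and m costs O(nm), while the FFT route costs O(n log n), which is what makes hardware optimized for the underlying matrix arithmetic attractive for this workload.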


Developing Future Human-Centered Smart Cities: Critical Analysis of Smart City Security, Interpretability, and Ethical Challenges

Ahmad, Kashif, Maabreh, Majdi, Ghaly, Mohamed, Khan, Khalil, Qadir, Junaid, Al-Fuqaha, Ala

arXiv.org Artificial Intelligence

As we make tremendous advances in machine learning and artificial intelligence technosciences, there is a renewed understanding in the AI community that we must ensure that human beings are at the center of our deliberations so that we do not end up in technology-induced dystopias. As strongly argued by Green in his book The Smart Enough City, the incorporation of technology in city environs does not automatically translate into prosperity, wellbeing, urban livability, or social justice. There is a great need to deliberate on the future of cities worth living in and to design them accordingly. There are philosophical and ethical questions involved, along with various challenges relating to the security, safety, and interpretability of the AI algorithms that will form the technological bedrock of future cities. Several research institutes on human-centered AI have been established at top international universities. Globally, there are calls for technology to be made more humane and human-compatible. Stuart Russell, for example, has a book called Human Compatible. The Center for Humane Technology advocates for regulators and technology companies to avoid business models and product features that contribute to social problems such as extremism, polarization, misinformation, and Internet addiction. In this paper, we analyze and explore key challenges, including security, robustness, interpretability, and ethical challenges, to a successful deployment of AI or ML in human-centric applications, with a particular emphasis on the convergence of these challenges. We provide a detailed review of the existing literature on these key challenges and analyze how one of these challenges may lead to others or help in solving them. The paper also advises on the current limitations, pitfalls, and future directions of research in these domains, and how future work can fill the current gaps and lead to better solutions.


Machine Learning Explainability for External Stakeholders

Bhatt, Umang, Andrus, McKane, Weller, Adrian, Xiang, Alice

arXiv.org Artificial Intelligence

As machine learning is increasingly deployed in high-stakes contexts affecting people's livelihoods, there have been growing calls to open the black box and to make machine learning algorithms more explainable. Providing useful explanations requires careful consideration of the needs of stakeholders, including end-users, regulators, and domain experts. Despite this need, little work has been done to facilitate inter-stakeholder conversation around explainable machine learning. To help address this gap, we conducted a closed-door, day-long workshop between academics, industry experts, legal scholars, and policymakers to develop a shared language around explainability and to understand the current shortcomings of and potential solutions for deploying explainable machine learning in service of transparency goals. We also asked participants to share case studies in deploying explainable machine learning at scale. In this paper, we provide a short summary of various case studies of explainable machine learning, share lessons from those studies, and discuss open challenges.


An Information-Theoretic Approach to Explainable Machine Learning

Jung, Alexander

arXiv.org Machine Learning

A key obstacle to the successful deployment of machine learning (ML) methods in important application domains is the (lack of) explainability of predictions. Explainable ML is challenging since explanations must be tailored (personalized) to individual users with varying backgrounds. At one extreme, users may have graduate-level education in machine learning; at the other, users might have no formal education in linear algebra. Linear regression with few features might be perfectly interpretable for the first group but must be considered a black box by the latter. Using a simple probabilistic model for the predictions and user knowledge, we formalize explainable ML using information theory. Providing an explanation is then considered the task of reducing the "surprise" incurred by a prediction. Moreover, the effect of an explanation is measured by the conditional mutual information between the explanation and the prediction, given the user background.
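As a rough formal sketch of the measure described (symbols assumed here, with E the explanation, \hat{Y} the prediction, and U the user background; the paper's own notation may differ):

\[
I(E; \hat{Y} \mid U) \;=\; H(\hat{Y} \mid U) \;-\; H(\hat{Y} \mid E, U),
\]

i.e. an explanation is effective to the extent that it reduces the user's remaining uncertainty, the "surprise", about the prediction beyond what their background U already resolves.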


Best Machine Learning Research of 2019

#artificialintelligence

The field of machine learning has continued to accelerate through 2019, moving at light speed with compelling new results coming out of academia and the research arms of large tech firms like Google, Microsoft, Yahoo, Facebook, and many more. It's a daunting task for the down-in-the-trenches data scientist to keep pace. I advise my data science students at UCLA to stay up on the latest research results in order to keep ahead of the pack. I recount how industry luminary Andrew Ng keeps his head above water by toting around a file of research papers (so when he has a free moment, like riding in an Uber, he can consume part of a paper). It does take time to add the research realm to your everyday duties, but I think it's fun to know which technologies are fertile areas of research.